17 research outputs found

    A logic of default justifications

    Get PDF

    Reasoning with Defeasible Reasons

    Get PDF
    The information age is marked by a tremendous amount of incoming information. Even so, the information we deal with is almost always incomplete or even conflicting. This thesis investigates the logical principles behind our ability to find the right answers and to recover from errors we make in drawing hasty conclusions. Consider, for example, the following headline: "NASA warns of an asteroid capable of ending human civilization approaching". The headline gives you a reason to conclude that the Earth is on a collision course. However, were you to read below the headline that although the asteroid is getting closer, it will pass the Earth at a distance more than sixteen times that of the Moon, you would doubt your reasons for concluding that a collision is about to happen. This commonsense ability to question old reasons in the wake of new information is known as the "defeasibility" of reasons. Defeasible reasons came to the attention of AI researchers who realized that designing intelligent computer programs requires a principled understanding of our commonsense abilities. The relevance of commonsense reasoning is nowadays emphasized by the need to increase the transparency of AI systems, but also by the fact that AI systems still underperform in commonsense reasoning tasks. This thesis investigates the role of logic in commonsense reasoning. Firstly, it develops logical systems that are successful in modeling defeasible and commonsense reasoning. Secondly, it shows why commonsense reasoners are bound to reason logically, despite being prone to errors.
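
    To make the asteroid example concrete, here is a minimal, purely illustrative sketch (not taken from the thesis) in Python: a default rule supports its conclusion as long as no defeating fact is known, and the conclusion is withdrawn once a defeater arrives. The names DefaultRule, conclusions and the fact strings are assumptions introduced only for this illustration.

        # Illustrative sketch of defeasible reasoning with a toy rule format;
        # not the formal system developed in the thesis.
        from dataclasses import dataclass, field

        @dataclass
        class DefaultRule:
            premise: str                                 # what the rule needs to fire
            conclusion: str                              # what it tentatively supports
            defeaters: set = field(default_factory=set)  # facts that block the rule

        def conclusions(facts, rules):
            """Return the facts plus conclusions of applicable, undefeated rules."""
            out = set(facts)
            for r in rules:
                if r.premise in facts and not (r.defeaters & facts):
                    out.add(r.conclusion)
            return out

        # Hypothetical encoding of the headline example.
        headline_rule = DefaultRule(
            premise="asteroid_approaching_headline",
            conclusion="collision_imminent",
            defeaters={"passes_at_16_lunar_distances"},
        )

        facts = {"asteroid_approaching_headline"}
        print(conclusions(facts, [headline_rule]))  # collision tentatively concluded

        facts.add("passes_at_16_lunar_distances")   # new information defeats the reason
        print(conclusions(facts, [headline_rule]))  # collision no longer concluded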

    Reifying default reasons in justification logic

    Get PDF
    The main goal of this paper is to argue that justification logic advances the formal study of default reasons. After introducing a variant of justification logic with default reasons, we first show how the logic can be used to model undercutting attacks and exclusionary reasons. We then compare this logic to Reiter's default logic interpreted as an argumentation framework. The comparison is carried out by analyzing differences in the way process trees are built for the two logics.
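
    The distinction drawn in the abstract can be sketched with assumed toy definitions (not the paper's justification logic): an undercutter attacks the link between a reason and its conclusion rather than the conclusion itself. Justification, accepted and the example atoms are hypothetical names for this illustration.

        # Illustrative sketch only; the paper's formal machinery is richer.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Justification:
            reason: str       # a reason term, e.g. "bird"
            conclusion: str   # the formula it defeasibly supports

        def accepted(justifications, undercutters, facts):
            """Keep a conclusion only if its reason holds and is not undercut."""
            result = set()
            for j in justifications:
                if j.reason in facts and not (undercutters.get(j.reason, set()) & facts):
                    result.add(j.conclusion)
            return result

        # "bird" is a default reason for "flies"; "penguin" undercuts that reason,
        # attacking the inference itself rather than the statement "flies".
        js = [Justification(reason="bird", conclusion="flies")]
        undercutters = {"bird": {"penguin"}}

        print(accepted(js, undercutters, {"bird"}))             # {'flies'}
        print(accepted(js, undercutters, {"bird", "penguin"}))  # set()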

    Structured argumentation dynamics: Undermining attacks in default justification logic

    Get PDF
    This paper develops a logical theory that unifies all three standard types of argumentative attack in AI, namely rebutting, undercutting and undermining attacks. We build on default justification logic, which already represents undercutting and rebutting attacks, and we add undermining attacks. Intuitively, undermining does not target a default inference, as undercutting does, or a default conclusion, as rebutting does, but rather attacks an argument's premise as a starting point for default reasoning. In default justification logic, reasoning starts from a set of premises, which is then extended by conclusions that hold by default. We argue that modeling undermining defeaters in the setting of default theories requires changing the set of premises upon receiving new information. To model changes to premises, we give default justification logic a dynamic aspect by using techniques from the logic of belief revision. More specifically, undermining is modeled with belief revision operations that include contracting the set of premises, that is, removing some information from it. The novel combination of default reasoning and belief revision in justification logic enriches both approaches to reasoning under uncertainty. At the end of the paper, we show some important aspects of defeasible argumentation in which our logic compares favorably to structured argumentation frameworks.
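
    As a rough illustration of that idea, under assumed toy definitions rather than the paper's actual logic, the sketch below computes default conclusions from a premise set and models an undermining attack as contraction, that is, removal of a premise, after which the affected default conclusion is no longer derived. defaults_closure, contract and the example atoms are hypothetical.

        # Illustrative sketch of undermining as premise contraction; not the
        # belief-revision operators defined in the paper.
        def defaults_closure(premises, default_rules):
            """Extend the premises with conclusions of rules whose prerequisites all hold."""
            state = set(premises)
            changed = True
            while changed:
                changed = False
                for prereqs, concl in default_rules:
                    if prereqs <= state and concl not in state:
                        state.add(concl)
                        changed = True
            return state

        def contract(premises, dropped):
            """Belief-revision-style contraction: give up one premise entirely."""
            return premises - {dropped}

        premises = frozenset({"witness_says_p", "witness_is_reliable"})
        rules = [(frozenset({"witness_says_p", "witness_is_reliable"}), "p")]

        print(defaults_closure(premises, rules))   # 'p' holds by default

        # An undermining attack targets the premise 'witness_is_reliable' itself;
        # its effect is modelled by contracting the premise set before re-deriving.
        premises = contract(premises, "witness_is_reliable")
        print(defaults_closure(premises, rules))   # 'p' is no longer derived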

    Online Handbook of Argumentation for AI: Volume 2

    Get PDF
    Editors: Federico Castagna, Francesca Mosca, Jack Mumford, Stefan Sarkadi and Andreas Xydis. This volume contains revised versions of the papers selected for the second volume of the Online Handbook of Argumentation for AI (OHAAI). Previously, formal theories of argument and argument interaction have been proposed and studied, and this has led to the more recent study of computational models of argument. Argumentation, as a field within artificial intelligence (AI), is highly relevant for researchers interested in symbolic representations of knowledge and defeasible reasoning. The purpose of this handbook is to provide an open-access and curated anthology for the argumentation research community. OHAAI is designed to serve as a research hub to keep track of the latest and upcoming PhD-driven research on the theory and application of argumentation in all areas related to AI.